10th World Congress in Probability and Statistics

Organized Contributed Session (live Q&A at Track 3, 10:30 PM KST)

Organized Contributed Session 10

Random Conformal Geometry and Related Fields (Organizer: Nam-Gyu Kang)

Conference time: 10:30 PM — 11:00 PM KST
Local time: Jul 22 Thu, 6:30 AM — 7:00 AM PDT

Loewner dynamics for the multiple SLE(0) process

Tom Alberts (The University of Utah)

Recently Peltola and Wang introduced the multiple SLE$(0)$ process as the deterministic limit of the random multiple SLE$(\kappa)$ curves as $\kappa$ goes to zero. They prove this result by means of a "small $\kappa$" large deviations principle, but the limiting curves also turn out to have important geometric characterizations that are independent of their relation to SLE$(\kappa)$. In particular, they show that the SLE$(0)$ curves can be generated by a deterministic Loewner evolution driven by multiple points, and the vector field describing the evolution of these points must satisfy a particular system of algebraic equations. We show how to generate solutions to these algebraic equations in two ways: first in terms of the poles and critical points of an associated real rational function, and second via the well-known Calogero-Moser integrable system with particular initial velocities. Although our results are purely deterministic, they are again motivated by taking limits of probabilistic constructions, which I will explain.
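
For orientation, the chordal Loewner evolution driven by $n$ boundary points takes the following standard form (the notation and parametrization below are a common convention, not necessarily the one used in the talk):
$$\partial_t g_t(z) \;=\; \sum_{j=1}^{n} \frac{2\,\nu_j(t)}{g_t(z)-x_j(t)}, \qquad g_0(z)=z,$$
where the weights $\nu_j \ge 0$ fix the relative growth speeds of the $n$ curves; in the SLE$(0)$ setting the driving functions $x_j(t)$ evolve deterministically according to the algebraic system mentioned above.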

Conformal field theory for annulus SLE

Sung-Soo Byun (Seoul National University)

In this talk, I will present a constructive conformal field theory generated by central/background charge modifications of Gaussian free fields in a doubly connected domain and outline its connection to annulus SLE theory. Furthermore, I will explain some applications, which include Coulomb gas solutions to the null-vector equation for annulus SLE partition functions and hitting probabilities of level lines of Gaussian free fields.

Convergence of martingale observables in the massive FK-Ising model

S. C. Park (Korea Institute for Advanced Study)

We show the convergence of fermionic martingale observables (MO) of the FK-Ising model in the massive scaling limit. In parallel with recent work by Chelkak-Izyurov-Mahfouf (2021), we generalise the discrete complex analysis machinery developed in Chelkak-Smirnov (2012) for the critical isoradial setup to the massive setting. No assumptions on domain regularity or on the direction of the massive perturbation are imposed. We then discuss implications for the interface curves and Russo-Seymour-Welsh (RSW) type crossing estimates, as well as ongoing work on the spin model with Chelkak and Wan.
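
For context (stated here only schematically, with $m$ the mass parameter; this is the usual continuum picture rather than a statement from the talk), massive fermionic observables are commonly characterized by a massive perturbation of the Cauchy-Riemann equation,
$$\bar\partial f(z) \;=\; i\, m(z)\, \overline{f(z)},$$
the critical case corresponding to $m \equiv 0$; the discrete observables discussed above satisfy lattice analogues of such an equation.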

Boundary Minkowski content of multi-force-point SLE$_\kappa(\underline\rho)$ curves

Dapeng Zhan (Michigan State University)


Q&A for Organized Contributed Session 10

This talk does not have an abstract.

Session Chair

Nam-Gyu Kang (Korea Institute for Advanced Study)

Organized Contributed Session 30

Stochastic Adaptive Optimization Algorithms and their Applications to Neural Networks (Organizers: Miklos Rasonyi & Sotirios Sabanis)

Conference time: 10:30 PM — 11:00 PM KST
Local time: Jul 22 Thu, 6:30 AM — 7:00 AM PDT

An adaptive strong order 1 method for SDEs with discontinuous drift coefficient

Larisa Yaroslavtseva (University of Passau)

In recent years, a number of results have been proven in the literature on strong approximation of stochastic differential equations (SDEs) with a drift coefficient that may have discontinuities in space. In many of these results it is assumed that the drift coefficient satisfies piecewise regularity conditions and that the diffusion coefficient is Lipschitz continuous and non-degenerate at the discontinuity points of the drift coefficient. For scalar SDEs of this type, the best $L_p$-error rate known so far for approximation of the solution at the final time point is 3/4 in terms of the number of evaluations of the driving Brownian motion, and it is achieved by the transformed equidistant quasi-Milstein scheme. Recently, it has been shown in [1] that for such SDEs the $L_p$-error rate 3/4 cannot be improved in general by any numerical method based on evaluations of the driving Brownian motion at fixed time points. In this talk we present a numerical method based on sequential evaluations of the driving Brownian motion, which achieves an $L_p$-error rate of at least 1 in terms of the average number of evaluations of the driving Brownian motion for such SDEs.
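
The adaptive idea can be illustrated with a toy scheme: an Euler-Maruyama step whose size shrinks when the current state is close to the drift's discontinuity, so the Brownian path is evaluated sequentially rather than on a fixed grid. All names and the particular step-size rule below are illustrative assumptions, not the order-1 method of the talk.

```python
import numpy as np

def adaptive_euler(b, sigma, x0, T, h_max, h_min, delta, rng):
    """Toy adaptive Euler-Maruyama scheme for dX = b(X) dt + sigma(X) dW.

    The step size shrinks from h_max to h_min whenever X is within `delta`
    of the drift's discontinuity at 0, so the Brownian increments are drawn
    sequentially rather than on a fixed grid. Only an illustration of the
    adaptive idea, not the order-1 method discussed in the talk.
    """
    t, x, n_evals = 0.0, x0, 0
    while t < T:
        h = h_min if abs(x) < delta else h_max
        h = min(h, T - t)                    # do not step past the final time
        dW = rng.normal(0.0, np.sqrt(h))     # one new Brownian increment
        x = x + b(x) * h + sigma(x) * dW
        t += h
        n_evals += 1
    return x, n_evals

# Example: discontinuous drift b(x) = sign(x) - x with constant diffusion.
rng = np.random.default_rng(0)
x_T, cost = adaptive_euler(b=lambda x: np.sign(x) - x, sigma=lambda x: 1.0,
                           x0=0.1, T=1.0, h_max=1e-2, h_min=1e-4,
                           delta=0.05, rng=rng)
```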

The talk is based on joint work with Thomas Müller-Gronbach (University of Passau).

References

[1] T. Müller-Gronbach and L. Yaroslavtseva. Sharp lower error bounds for strong approximation of SDEs with discontinuous drift coefficient by coupling of noise. arXiv:2010.00915.

Nonconvex optimization via TUSLA with discontinuous updating

Ying Zhang (Nanyang Technological University)

We study the tamed unadjusted stochastic Langevin algorithm (TUSLA) proposed in Lovas et al. (2021) in the context of nonconvex optimization. In particular, we consider the case where the objective function of the optimization problem has a superlinear and discontinuous stochastic gradient. In such a setting, nonasymptotic error bounds are provided for the TUSLA algorithm in Wasserstein-1 and Wasserstein-2 distances. The latter result enables us to further derive theoretical guarantees for the expected excess risk. Numerical experiments are presented for synthetic examples where popular algorithms, e.g. ADAM, AMSGRAD, RMSProp, and SGD, fail to find the minimizer of the objective functions due to the superlinearity and the discontinuity of the stochastic gradients, while the TUSLA algorithm converges rapidly to the optimal solution. Moreover, an example in transfer learning is provided to illustrate the applicability of the TUSLA algorithm, and its simulation results support our theoretical findings.
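
As a rough illustration of the taming idea, one can damp a superlinearly growing stochastic gradient before taking the Langevin step. The taming factor and all parameter names below are illustrative assumptions; the exact form used by TUSLA in Lovas et al. (2021) may differ.

```python
import numpy as np

def tusla_step(theta, stoch_grad, lam, beta, r, rng):
    """One illustrative tamed unadjusted stochastic Langevin update.

    The stochastic gradient is damped by a polynomial taming factor so that
    superlinearly growing (or discontinuous) gradients cannot make the
    iterate explode; the precise taming in the TUSLA paper may differ.
    """
    g = stoch_grad(theta)
    tamed = g / (1.0 + np.sqrt(lam) * np.linalg.norm(theta) ** (2 * r))
    noise = rng.normal(size=theta.shape)
    return theta - lam * tamed + np.sqrt(2.0 * lam / beta) * noise

# Example: a gradient growing like theta**3, where an untamed step can diverge.
rng = np.random.default_rng(1)
theta = np.array([5.0])
stoch_grad = lambda th: th ** 3 + rng.normal(scale=0.1, size=th.shape)
for _ in range(2000):
    theta = tusla_step(theta, stoch_grad, lam=1e-3, beta=1e8, r=1, rng=rng)
```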

Approximation of stochastic equations with irregular drifts

Konstantinos Dareiotis (University of Leeds)

In this talk we will discuss the rate of convergence of the Euler scheme for stochastic differential equations with irregular drifts. Our approach relies on regularisation-by-noise techniques and, more specifically, on the recently developed stochastic sewing lemma. The advantages of this approach are numerous and include the derivation of improved (optimal) rates and the treatment of non-Markovian settings. We will consider drifts in Hölder and Sobolev classes, but also drifts that are merely bounded and measurable. The latter gives the first, and at the same time optimal, quantification of a convergence theorem of Gyöngy and Krylov.
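
For concreteness, the scheme under discussion is the standard Euler-Maruyama discretisation; the particular bounded, discontinuous drift in the snippet below is only an illustrative choice of the kind of irregularity covered.

```python
import numpy as np

def euler_maruyama(b, x0, T, n, rng):
    """Standard Euler-Maruyama scheme for dX = b(X) dt + dW on [0, T] with n steps."""
    h = T / n
    x = x0
    for _ in range(n):
        x = x + b(x) * h + rng.normal(0.0, np.sqrt(h))
    return x

# A merely bounded, measurable (discontinuous) drift, as in the Gyongy-Krylov setting.
rng = np.random.default_rng(2)
b = lambda x: 1.0 if x < 0 else -1.0
x_approx = euler_maruyama(b, x0=0.0, T=1.0, n=1000, rng=rng)
```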

This talk is based on joint works with Oleg Butkovsky, Khoa Lê, and Máté Gerencsér.

Neural SDEs: deep generative models in the diffusion limit

Maxim Raginsky (University of Illinois at Urbana-Champaign)

In deep generative models, the latent variable is generated by a time-inhomogeneous Markov chain, where at each time step we pass the current state through a parametric nonlinear map, such as a feedforward neural net, and add a small independent Gaussian perturbation. In this talk, based on joint work with Belinda Tzen, I will discuss the diffusion limit of such models, where we increase the number of layers while sending the step size and the noise variance to zero. I will first provide a stochastic control formulation of sampling in such generative models. Then I will show how we can quantify the expressiveness of diffusion-based generative models. Specifically, I will prove that one can efficiently sample from a wide class of terminal target distributions by choosing the drift of the latent diffusion from the class of multilayer feedforward neural nets, with the accuracy of sampling measured by the Kullback-Leibler divergence to the target distribution.
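
A minimal sketch of the chain described above, assuming a small fixed feedforward drift map (all sizes, names, and the particular network are illustrative, not the models of the talk): each layer adds a drift step plus a small Gaussian perturbation, and sending the step size and noise variance to zero as the number of layers grows yields the latent diffusion.

```python
import numpy as np

def latent_chain(x0, drift_net, n_layers, T, noise_scale, rng):
    """Time-inhomogeneous Markov chain x_{k+1} = x_k + h f(x_k, t_k) + sqrt(h) * s * z_k.

    With h = T / n_layers, letting n_layers -> infinity gives the diffusion
    dX_t = f(X_t, t) dt + s dB_t, i.e. the neural SDE limit discussed above.
    """
    h = T / n_layers
    x = x0
    for k in range(n_layers):
        z = rng.normal(size=x.shape)
        x = x + h * drift_net(x, k * h) + np.sqrt(h) * noise_scale * z
    return x

# Illustrative one-hidden-layer drift with weights fixed at random for the sketch.
rng = np.random.default_rng(3)
W1, W2 = rng.normal(size=(16, 2)), rng.normal(size=(2, 16))
drift = lambda x, t: W2 @ np.tanh(W1 @ x)
x_T = latent_chain(x0=np.zeros(2), drift_net=drift, n_layers=256,
                   T=1.0, noise_scale=0.5, rng=rng)
```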

Diffusion approximations and control variates for MCMC

Eric Moulines (Ecole Polytechnique)

A new method is introduced for the construction of control variates to reduce the variance of additive functionals of Markov Chain Monte Carlo (MCMC) samplers. These control variates are obtained by minimizing the asymptotic variance associated with the Langevin diffusion over a family of functions. To motivate our approach, it is shown that the asymptotic variance of some well-known MCMC algorithms, including the Random Walk Metropolis and the Unadjusted and Metropolis-Adjusted Langevin Algorithms, is well approximated by that of the Langevin diffusion. When applied to a class of linear control variates, it is established that the variance of the resulting estimators is smaller, for a given computational complexity, than that of the standard Monte Carlo estimator. Several examples of Bayesian inference problems support our findings.
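
A minimal sketch of the underlying construction (notation mine, under standard regularity assumptions): for a target $\pi \propto e^{-U}$, the overdamped Langevin diffusion $dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dB_t$ has generator
$$\mathcal{L}g \;=\; -\nabla U \cdot \nabla g + \Delta g, \qquad \pi(\mathcal{L}g) = 0 \ \text{for suitable } g,$$
so replacing $f$ by $f - \mathcal{L}g$ leaves the target expectation unchanged, and $g$ can then be optimized, e.g. over a linear family of basis functions, to minimize the asymptotic variance of the resulting estimator.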

Q&A for Organized Contributed Session 30

This talk does not have an abstract.

Session Chair

Sotirios Sabanis (University of Edinburgh)
